# Efficient Computing
## Kanana Nano 2.1B Base (kakaocorp)
Kanana is a series of bilingual (Korean/English) large language models developed by Kakao that excel at Korean tasks while remaining competitive in English. The 2.1B version is the lightweight base model of the series.
Tags: Large Language Model · Transformers · Supports multiple languages

## Kanana Nano 2.1B Instruct (kakaocorp)
Kanana is a bilingual (Korean/English) language model series developed by Kakao. This 2.1B-parameter instruct version outperforms similar models on Korean tasks while keeping computational costs low.
Tags: Large Language Model · Transformers · Supports multiple languages

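Both Kanana cards list Hugging Face Transformers support, so a minimal loading-and-prompting sketch might look like the following. The Hub repo id `kakaocorp/kanana-nano-2.1b-instruct` and the bf16/device settings are assumptions; verify them against the model card.

```python
# Minimal sketch: loading and prompting Kanana Nano 2.1B Instruct with Transformers.
# Assumption: the Hub repo id is "kakaocorp/kanana-nano-2.1b-instruct"; check the
# model card for the exact id, license terms, and recommended generation settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kakaocorp/kanana-nano-2.1b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 2.1B-parameter model fits on a single GPU in bf16
    device_map="auto",
)

# The instruct variant expects chat-formatted input via the tokenizer's chat template.
messages = [{"role": "user", "content": "카카오의 Kanana 모델을 한 문장으로 소개해 줘."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```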
## ProSparse LLaMA 2 7B (SparseLLM)
A large language model based on LLaMA-2-7B with activation sparsification: the ProSparse method achieves high activation sparsity (89.32%) while maintaining the original model's performance.
Tags: Large Language Model · Transformers · English

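The 89.32% figure refers to activation sparsity: the fraction of feed-forward intermediate activations that are zero, so the corresponding computation can be skipped. A small PyTorch sketch of measuring that statistic on a generic ReLU feed-forward block follows; it is not the ProSparse training procedure itself, and the layer sizes are only illustrative.

```python
# Sketch of the statistic behind the 89.32% figure: the fraction of FFN intermediate
# activations that are exactly zero after a ReLU. This measures activation sparsity
# on a generic ReLU feed-forward block; it is not the ProSparse training procedure,
# and the layer sizes below are illustrative (LLaMA-2-7B-like), not exact.
import torch
import torch.nn as nn

class ReLUFFN(nn.Module):
    def __init__(self, d_model: int = 4096, d_ff: int = 11008):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        self.last_sparsity = 0.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.up(x))                      # sparsified intermediate activations
        self.last_sparsity = (h == 0).float().mean().item()
        return self.down(h)

ffn = ReLUFFN()
x = torch.randn(2, 16, 4096)                            # (batch, seq_len, d_model)
_ = ffn(x)
print(f"activation sparsity: {ffn.last_sparsity:.2%}")  # ~50% at random init; ProSparse pushes this much higher
```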
## MoLM 700M-4B (ibm-research)
License: Apache-2.0
MoLM is a series of language models built on the Mixture-of-Experts (MoE) architecture. The 700M-4B version has 4 billion parameters in total, but because only a subset of experts is activated for each token, its compute cost is equivalent to that of a dense 700-million-parameter model.
Tags: Large Language Model · Transformers

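The "compute of a dense 700M model despite 4B total parameters" property is the defining trait of MoE routing: every expert's weights count toward total parameters, but each token only runs through the experts the router selects. The toy top-1-routed MoE layer below illustrates the distinction; its sizes and expert count are illustrative, not MoLM's actual configuration.

```python
# Toy top-1-routed Mixture-of-Experts layer showing why compute tracks *active*
# parameters rather than *total* parameters: all experts live in memory, but each
# token only pays the FLOPs of the one expert the router selects. Sizes and expert
# count are illustrative, not MoLM's actual configuration.
import torch
import torch.nn as nn

class TopOneMoE(nn.Module):
    def __init__(self, d_model: int = 512, d_ff: int = 2048, n_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model); route each token to exactly one expert.
        expert_idx = self.router(x).argmax(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])  # only routed tokens pay this expert's FLOPs
        return out

layer = TopOneMoE()
total_params = sum(p.numel() for p in layer.parameters())
active_params = sum(p.numel() for p in layer.experts[0].parameters()) + sum(
    p.numel() for p in layer.router.parameters()
)
print(f"total params: {total_params:,}  |  active per token: {active_params:,}")
print(layer(torch.randn(10, 512)).shape)  # compute scales with active, not total, params
```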